A Complete Algorithms

Neural Information Processing Systems

In Section B, we provide some preliminaries. In Section C, we provide the sparsity analysis. We show the convergence analysis in Section D. In Section E, we show how to combine the sparsity, convergence, and running-time analyses. In Section F, we show the correlation between sparsity and the spectral gap of the Hessian in the neural tangent kernel. In Section G, we discuss how to generalize our result to the quantum setting.


c164bbc9d6c72a52c599bbb43d8db8e1-Paper.pdf

Neural Information Processing Systems

Deep neural networks have achieved impressive performance in many areas. Designing a fast and provable method for training neural networks is a fundamental question in machine learning. The classical training method incurs Ω(mnd) cost for both the forward computation and the backward computation, where m is the width of the neural network and we are given n training points in d-dimensional space.
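To see where the Ω(mnd) term comes from, here is a minimal sketch (illustrative only, not the paper's method, with hypothetical sizes n, d, m): a single forward pass through a width-m hidden layer multiplies an n×d data matrix by an m×d weight matrix, which takes n·m·d multiply-adds; the backward pass has the same order of cost.

```python
import numpy as np

# Hypothetical sizes for illustration only.
n, d, m = 128, 64, 256

X = np.random.randn(n, d)   # n training points in d dimensions
W = np.random.randn(m, d)   # first-layer weights: width m

# Forward pass of one hidden layer: the product X @ W.T performs
# n * m * d multiply-adds -- the Omega(mnd) cost the abstract
# refers to. The backward pass repeats work of the same order.
H = np.maximum(X @ W.T, 0.0)  # ReLU activations, shape (n, m)

print(H.shape)
```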


8b9e7ab295e87570551db122a04c6f7c-Supplemental.pdf

Neural Information Processing Systems

Neural transport augmented sampling, first introduced by Parno and Marzouk (2018), is a general method that uses a normalizing flow T to sample from a given density π. Samples can thus be generated from π(θ) by running an MCMC chain in the Z-space and pushing these samples onto the Θ-space using T. Neural transport augmented samplers have subsequently been extended by Hoffman et al. In this paper, we propose an equivariant Stein variational gradient descent algorithm for sampling from densities that are invariant to symmetry transformations. Another contribution of our work is using this equivariant sampling method to efficiently train equivariant energy-based models for probabilistic modeling and inference.
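A minimal sketch of the neural transport idea described above (not the paper's equivariant method): run Metropolis MCMC in Z-space on the pullback density π(T(z))|det ∇T(z)| and push accepted states through T. The flow here is a hypothetical affine map standing in for a trained normalizing flow, and the target π is a toy Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained normalizing flow T: Z -> Theta.
# A real neural transport sampler uses a learned flow with a
# tractable Jacobian; an affine map keeps the sketch self-contained.
a, b = 2.0, 1.0
T = lambda z: a * z + b
log_det_T = np.log(abs(a))        # log |dT/dz|, constant for affine T

def log_pi(theta):                # toy target pi(theta) = N(3, 2^2), unnormalized
    return -0.5 * ((theta - 3.0) / 2.0) ** 2

def log_pullback(z):              # density of Z such that T(Z) ~ pi
    return log_pi(T(z)) + log_det_T

# Random-walk Metropolis in Z-space; push each state onto Theta-space.
z, samples = 0.0, []
for _ in range(20000):
    prop = z + rng.normal(scale=1.0)
    if np.log(rng.uniform()) < log_pullback(prop) - log_pullback(z):
        z = prop
    samples.append(T(z))

theta = np.array(samples[5000:])  # discard burn-in
print(theta.mean())               # should be near the target mean 3.0
```

The design point is that the chain mixes in the (often better-conditioned) Z-space, while the flow T carries the geometry of π.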




An Accelerated Algorithm for Stochastic Bilevel Optimization under Unbounded Smoothness

Neural Information Processing Systems

This paper investigates a class of stochastic bilevel optimization problems where the upper-level function is nonconvex with potentially unbounded smoothness and the lower-level problem is strongly convex. These problems have significant applications in sequential data learning, such as text classification using recurrent neural networks. The unbounded smoothness is characterized by the smoothness constant of the upper-level function scaling linearly with the gradient norm, lacking a uniform upper bound. Existing state-of-the-art algorithms require $\widetilde{O}(\epsilon^{-4})$ oracle calls to the stochastic gradient or Hessian/Jacobian-vector product to find an $\epsilon$-stationary point. However, it remains unclear whether we can further improve the convergence rate when the assumptions on the function at the population level also hold for each random realization almost surely (e.g., Lipschitzness of each realization of the stochastic gradient).
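The "smoothness constant scaling linearly with the gradient norm" is commonly formalized as relaxed (L_0, L_1)-smoothness; one standard statement (our paraphrase of that condition, not necessarily the paper's exact assumption) is:

```latex
% Relaxed (L_0, L_1)-smoothness of the upper-level objective f:
% the local smoothness constant grows linearly with the gradient
% norm, so no uniform upper bound L exists.
\|\nabla^2 f(x)\| \;\le\; L_0 + L_1 \,\|\nabla f(x)\|
\qquad \text{for all } x,
```

with ordinary L-smoothness recovered when L_1 = 0.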